Tight minimax rates for manifold estimation under Hausdorff loss
Similar articles
Manifold Estimation and Singular Deconvolution Under Hausdorff Loss
We find lower and upper bounds for the risk of estimating a manifold in Hausdorff distance under several models. We also show that there are close connections between manifold estimation and the problem of deconvolving a singular measure. 1. Introduction. Manifold learning is an area of intense research activity in machine learning and statistics. Yet a very basic question about manifold learn...
Minimax Manifold Estimation
We find the minimax rate of convergence in Hausdorff distance for estimating a manifold M of dimension d embedded in R^D given a noisy sample from the manifold. Under certain conditions, we show that the optimal rate of convergence is n^{-2/(2+d)}. Thus, the minimax rate depends only on the dimension of the manifold, not on the dimension of the space in which M is embedded.
Variational Minimax Estimation of Discrete Distributions under KL Loss
We develop a family of upper and lower bounds on the worst-case expected KL loss for estimating a discrete distribution on a finite number m of points, given N i.i.d. samples. Our upper bounds are approximation-theoretic, similar to recent bounds for estimating discrete entropy; the lower bounds are Bayesian, based on averages of the KL loss under Dirichlet distributions. The upper bounds are con...
Minimax estimation of multivariate normal mean under balanced loss function
This paper considers simultaneous estimation of a multivariate normal mean vector using Zellner's (1994) balanced loss function when σ² is known and unknown. We show that the usual estimator X is minimax and obtain a class of minimax estimators which have uniformly smaller risk than the usual estimator X. Also, we obtain the proper Bayes estimator relative to the balanced loss function and find the minim...
Minimax Estimation of Discrete Distributions under $\ell_1$ Loss
We consider the problem of discrete distribution estimation under ℓ1 loss. We provide tight upper and lower bounds on the maximum risk of the empirical distribution (the maximum likelihood estimator), and the minimax risk in regimes where the support size S may grow with the number of observations n. We show that among distributions with bounded entropy H, the asymptotic maximum risk for the e...
Journal
Journal title: Electronic Journal of Statistics
Year: 2015
ISSN: 1935-7524
DOI: 10.1214/15-ejs1039